
    Neuronal avalanches of a self-organized neural network with active-neuron-dominant structure

    Neuronal avalanches are spontaneous neuronal activity whose population event sizes follow a power-law distribution with an exponent of -3/2. They have been observed in the superficial layers of cortex both in vivo and in vitro. In this paper we analyze information transmission in a novel self-organized neural network with an active-neuron-dominant structure. Neuronal avalanches can be observed in this network at appropriate input intensities. We find that network learning via spike-timing-dependent plasticity dramatically increases the complexity of the network structure, which finally self-organizes into active-neuron-dominant connectivity. Both the entropy of the activity patterns and the complexity of their resulting post-synaptic inputs are maximized when the network dynamics propagate as neuronal avalanches. This emergent topology supports highly efficient information transmission and could also account for the large information capacity of this network compared with archetypal networks with different neural connectivity.
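
    As a rough illustration of the kind of analysis the abstract refers to, the sketch below detects avalanches in a binned spike raster and estimates the exponent of their size distribution. The raster, the bin statistics, and the estimator choice are assumptions for illustration, not the authors' code or data.

```python
# Minimal sketch (not the paper's code): detect neuronal avalanches in a binned
# spike raster and estimate the power-law exponent of their size distribution.
# The raster below is synthetic; in the paper the activity comes from a
# self-organized network shaped by spike-timing-dependent plasticity.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical raster: each entry is the number of spikes in one time bin.
activity = rng.poisson(lam=0.5, size=20_000)

# An avalanche is a run of consecutive non-empty bins bounded by empty bins;
# its size is the total number of spikes in the run.
sizes, current = [], 0
for a in activity:
    if a > 0:
        current += a
    elif current > 0:
        sizes.append(current)
        current = 0
sizes = np.array(sizes, dtype=float)

# Continuous-approximation maximum-likelihood estimate of alpha in P(s) ~ s^(-alpha);
# avalanche criticality corresponds to alpha close to 3/2.
s_min = 1.0
alpha_hat = 1.0 + len(sizes) / np.sum(np.log(sizes / s_min))
print(f"estimated avalanche-size exponent: {alpha_hat:.2f}")
```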

    Model of Low-pass Filtering of Local Field Potentials in Brain Tissue

    Local field potentials (LFPs) are routinely measured experimentally in brain tissue and exhibit strong low-pass frequency filtering properties, with high frequencies (such as action potentials) being visible only at very short distances (≈10 μm) from the recording electrode. Understanding this filtering is crucial for relating LFP signals to neuronal activity, but little is known about the exact mechanisms underlying this low-pass filtering. In this paper, we investigate a possible biophysical mechanism for the low-pass filtering properties of LFPs. We investigate the propagation of electric fields and its frequency dependence close to the current source, i.e. at length scales on the order of the average interneuronal distance. We take into account the presence of a high density of cellular membranes around current sources, such as glial cells. By considering them as passive cells, we show that under the influence of the electric source field, they respond by polarization, i.e., creation of an induced field. Because of the finite velocity of ionic charge movement, this polarization will not be instantaneous. Consequently, the induced electric field will be frequency dependent, and much reduced for high frequencies. Our model establishes that, with respect to frequency attenuation properties, this situation is analogous to an equivalent RC circuit, or better, to a system of coupled RC circuits. We present a number of numerical simulations of the induced electric field for biologically realistic parameter values, and show this frequency filtering effect as well as the attenuation of extracellular potentials with distance. We suggest that induced electric fields in passive cells surrounding neurons are the physical origin of the frequency filtering properties of LFPs.
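
    The RC analogy in the abstract can be made concrete with the standard first-order low-pass transfer function |H(f)| = 1/sqrt(1 + (f/fc)^2). The sketch below uses placeholder R and C values, not the parameters of the paper's model, just to show how amplitude falls off between LFP-band and spike-band frequencies.

```python
# Minimal sketch of the equivalent-RC-circuit analogy: a first-order low-pass
# filter attenuates high-frequency components of the source field.
# R and C are illustrative placeholders, not the paper's fitted parameters.
import numpy as np

R = 1e8      # effective resistance (ohm), illustrative value
C = 1e-10    # effective capacitance (farad), illustrative value
fc = 1.0 / (2.0 * np.pi * R * C)              # cutoff frequency of the RC circuit

freqs = np.array([1.0, 10.0, 100.0, 1000.0])  # Hz: from LFP band up to spike band
gain = 1.0 / np.sqrt(1.0 + (freqs / fc) ** 2) # |H(f)| of a first-order filter

for f, g in zip(freqs, gain):
    print(f"{f:7.1f} Hz -> relative amplitude {g:.3f}")
```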

    Does the 1/f frequency-scaling of brain signals reflect self-organized critical states?

    Many complex systems display self-organized critical states characterized by 1/f frequency scaling of power spectra. Global variables such as the electroencephalogram scale as 1/f, which could be a sign of self-organized critical states in neuronal activity. By analyzing simultaneous recordings of global and neuronal activities, we confirm the 1/f scaling of global variables for selected frequency bands, but show that neuronal activity is not consistent with critical states. We propose a model of 1/f scaling which does not rely on critical states, and which is testable experimentally.
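
    A common way to test a claim of 1/f scaling is to fit the slope of the power spectrum on log-log axes over a chosen band. The sketch below does this on synthetic pink noise; the sampling rate, band, and signal are assumptions standing in for the EEG/LFP recordings analyzed in the paper.

```python
# Minimal sketch (not the paper's analysis): estimate the frequency-scaling
# exponent of a signal's power spectrum with a log-log straight-line fit.
import numpy as np
from scipy.signal import welch

rng = np.random.default_rng(1)
fs, n = 1000.0, 2 ** 16                      # sampling rate (Hz), sample count

# Synthesize approximately 1/f ("pink") noise by shaping white noise in Fourier space.
white = rng.standard_normal(n)
spectrum = np.fft.rfft(white)
freqs = np.fft.rfftfreq(n, d=1.0 / fs)
freqs[0] = freqs[1]                          # avoid division by zero at DC
signal = np.fft.irfft(spectrum / np.sqrt(freqs), n=n)

# Welch power spectral density, then a linear fit in log-log coordinates.
f, pxx = welch(signal, fs=fs, nperseg=4096)
band = (f >= 1.0) & (f <= 100.0)             # fit over a chosen frequency band
slope, _ = np.polyfit(np.log10(f[band]), np.log10(pxx[band]), 1)
print(f"fitted exponent: {slope:.2f}  (1/f scaling corresponds to -1)")
```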

    Corticothalamic projections control synchronization in locally coupled bistable thalamic oscillators

    Thalamic circuits are able to generate state-dependent oscillations of different frequencies and degrees of synchronization. However, little is known about how synchronous oscillations, such as spindle oscillations in the thalamus, are organized in the intact brain. Experimental findings suggest that the simultaneous occurrence of spindle oscillations over widespread territories of the thalamus is due to corticothalamic projections, as the synchrony is lost in the decorticated thalamus. Here we study the influence of corticothalamic projections on the synchrony in a thalamic network, and uncover the underlying control mechanism, leading to a control method that is applicable to a wide range of stochastically driven excitable units.
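
    The abstract's central observation, that a common cortical drive can synchronize locally coupled stochastic oscillators, can be caricatured with noisy phase oscillators on a ring. This is a stand-in model, not the bistable thalamic units of the paper, and all parameter values are placeholders.

```python
# Illustrative stand-in (not the paper's model): a ring of noisy phase
# oscillators with local coupling and an optional common periodic drive
# playing the role of the corticothalamic projection.  The Kuramoto order
# parameter r quantifies synchrony.
import numpy as np

rng = np.random.default_rng(2)

def mean_synchrony(n=100, k_local=0.5, k_drive=0.0, steps=5000, dt=0.01):
    """Return the time-averaged Kuramoto order parameter r after a transient."""
    theta = rng.uniform(0.0, 2.0 * np.pi, n)
    omega = rng.normal(1.0, 0.1, n)                # heterogeneous frequencies
    r_vals = []
    for step in range(steps):
        t = step * dt
        left, right = np.roll(theta, 1), np.roll(theta, -1)
        coupling = k_local * (np.sin(left - theta) + np.sin(right - theta))
        drive = k_drive * np.sin(1.0 * t - theta)  # common forcing near the mean frequency
        noise = 0.2 * np.sqrt(dt) * rng.standard_normal(n)
        theta = theta + (omega + coupling + drive) * dt + noise
        r_vals.append(np.abs(np.mean(np.exp(1j * theta))))
    return float(np.mean(r_vals[steps // 2:]))

print("local coupling only:", round(mean_synchrony(k_drive=0.0), 2))
print("with common drive  :", round(mean_synchrony(k_drive=1.0), 2))
```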

    The Ising Model for Neural Data: Model Quality and Approximate Methods for Extracting Functional Connectivity

    We study pairwise Ising models for describing the statistics of multi-neuron spike trains, using data from a simulated cortical network. We explore efficient ways of finding the optimal couplings in these models and examine their statistical properties. To do this, we extract the optimal couplings for subsets of up to 200 neurons, essentially exactly, using Boltzmann learning. We then study the quality of several approximate methods for finding the couplings by comparing their results with those found from Boltzmann learning. Two of these methods, inversion of the TAP equations and an approximation proposed by Sessak and Monasson, are remarkably accurate. Using these approximations for larger subsets of neurons, we find that extracting couplings using data from a subset smaller than the full network tends systematically to overestimate their magnitude. This effect is described qualitatively by infinite-range spin glass theory for the normal phase. We also show that a globally correlated input to the neurons in the network leads to a small increase in the average coupling. However, the pair-to-pair variation of the couplings is much larger than this and reflects intrinsic properties of the network. Finally, we study the quality of these models by comparing their entropies with that of the data. We find that they perform well for small subsets of the neurons in the network, but the fit quality starts to deteriorate as the subset size grows, signalling the need to include higher-order correlations to describe the statistics of large networks.
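
    Two of the approximations discussed, naive mean-field and TAP inversion, amount to simple matrix operations on the measured magnetizations and correlations. The sketch below applies them to synthetic ±1 data; the data and network size are assumptions, not the paper's simulated cortical network.

```python
# Sketch of approximate inverse-Ising methods on placeholder data: naive
# mean-field (J ~ -(C^-1) off-diagonal) and the TAP inversion, which solves
# (C^-1)_ij = -J_ij - 2 m_i m_j J_ij^2 for each pair i != j.
import numpy as np

rng = np.random.default_rng(3)
spins = np.where(rng.random((10_000, 20)) < 0.4, 1.0, -1.0)   # synthetic +/-1 states

m = spins.mean(axis=0)                        # magnetizations <s_i>
C = np.cov(spins, rowvar=False)               # connected correlation matrix
C_inv = np.linalg.inv(C)

# Naive mean-field couplings.
J_nmf = -C_inv.copy()
np.fill_diagonal(J_nmf, 0.0)

# TAP couplings: keep the root that reduces to naive mean-field as m -> 0.
mm = np.outer(m, m)
disc = np.clip(1.0 - 8.0 * mm * C_inv, 0.0, None)   # guard the square root
J_tap = (np.sqrt(disc) - 1.0) / (4.0 * mm)
np.fill_diagonal(J_tap, 0.0)

print("largest |J| (naive mean-field):", np.abs(J_nmf).max())
print("largest |J| (TAP inversion)   :", np.abs(J_tap).max())
```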

    Action Potential Initiation in the Hodgkin-Huxley Model

    A recent paper by B. Naundorf et al. described an intriguing negative correlation between the variability of the onset potential at which an action potential occurs (the onset span) and the rapidity of action potential initiation (the onset rapidity). This correlation was demonstrated in numerical simulations of the Hodgkin-Huxley model. Due to this antagonism, it is argued that Hodgkin-Huxley-type models are unable to explain action potential initiation observed in cortical neurons in vivo or in vitro. Here we apply a method from theoretical physics to derive an analytical characterization of this problem. We analytically compute the probability distribution of onset potentials and analytically derive the inverse relationship between onset span and onset rapidity. We find that the relationship between onset span and onset rapidity depends on the level of synaptic background activity. Hence we are able to elucidate the regions of parameter space for which the Hodgkin-Huxley model is able to accurately describe the behavior of this system.
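
    To make the quantities in the abstract concrete, the sketch below integrates the textbook Hodgkin-Huxley equations under a weakly noisy drive and measures the onset span as the spread of membrane potentials at which dV/dt first exceeds a rapidity criterion. The drive level, noise amplitude, and onset criterion are illustrative assumptions, not those of the cited studies.

```python
# Minimal sketch: standard Hodgkin-Huxley model, Euler integration, with the
# "onset" of each spike defined as the first point where dV/dt > 20 mV/ms.
import numpy as np

rng = np.random.default_rng(4)

# Textbook HH parameters (mV, ms, uF/cm^2, mS/cm^2, uA/cm^2).
C, gNa, gK, gL = 1.0, 120.0, 36.0, 0.3
ENa, EK, EL = 50.0, -77.0, -54.4

def a_m(V): return 0.1 * (V + 40.0) / (1.0 - np.exp(-(V + 40.0) / 10.0))
def b_m(V): return 4.0 * np.exp(-(V + 65.0) / 18.0)
def a_h(V): return 0.07 * np.exp(-(V + 65.0) / 20.0)
def b_h(V): return 1.0 / (1.0 + np.exp(-(V + 35.0) / 10.0))
def a_n(V): return 0.01 * (V + 55.0) / (1.0 - np.exp(-(V + 55.0) / 10.0))
def b_n(V): return 0.125 * np.exp(-(V + 65.0) / 80.0)

dt, T = 0.01, 2000.0                           # time step and duration (ms)
V, m, h, n = -65.0, 0.05, 0.6, 0.32            # initial state near rest
onsets, refractory = [], 0.0
for step in range(int(T / dt)):
    I = 10.0 + 1.0 * rng.standard_normal()     # noisy drive (illustrative values)
    I_ion = gNa * m**3 * h * (V - ENa) + gK * n**4 * (V - EK) + gL * (V - EL)
    dVdt = (I - I_ion) / C
    V += dt * dVdt
    m += dt * (a_m(V) * (1.0 - m) - b_m(V) * m)
    h += dt * (a_h(V) * (1.0 - h) - b_h(V) * h)
    n += dt * (a_n(V) * (1.0 - n) - b_n(V) * n)
    refractory = max(0.0, refractory - dt)
    if dVdt > 20.0 and refractory == 0.0:      # onset criterion reached
        onsets.append(V)
        refractory = 5.0                       # skip the rest of this spike

onsets = np.array(onsets)
print(f"{len(onsets)} spikes, onset span (std of onset potential): {onsets.std():.2f} mV")
```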

    Extracting synaptic conductances from single membrane potential traces

    In awake animals, the activity of the cerebral cortex is highly complex, with neurons firing irregularly with apparent Poisson statistics. One way to characterize this complexity is to take advantage of the high interconnectivity of cerebral cortex and use intracellular recordings of cortical neurons, which contain information about the activity of thousands of other cortical neurons. Identifying the membrane potential (Vm) with a stochastic process enables the extraction of important statistical signatures of this complex synaptic activity. Typically, one estimates the total synaptic conductances (excitatory and inhibitory), but this type of estimation requires at least two Vm levels and therefore cannot be applied to single Vm traces. We propose here a method to extract excitatory and inhibitory conductances (mean and variance) from single Vm traces. This "VmT method" estimates conductance parameters using maximum likelihood criteria, under the assumption that synaptic conductances are described by Gaussian stochastic processes and are integrated by a passive leaky membrane. The method is illustrated using models and is tested on guinea-pig visual cortex neurons in vitro using dynamic-clamp experiments. The VmT method holds promise for extracting conductances from single-trial measurements, which has a high potential for in vivo applications.
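
    The forward model assumed by the VmT method, a passive membrane driven by Gaussian (Ornstein-Uhlenbeck) excitatory and inhibitory conductances, is easy to simulate, which is how such an estimator is typically validated before the maximum-likelihood step. The sketch below only generates a synthetic Vm trace from that model; all parameter values are illustrative, and the likelihood maximization itself is not reproduced.

```python
# Sketch of the generative model behind the VmT method: a passive leaky membrane
# receiving excitatory/inhibitory conductances modelled as Ornstein-Uhlenbeck
# (Gaussian) processes.  Units: nF, uS, mV, ms; all values are placeholders.
import numpy as np

rng = np.random.default_rng(5)

Cm = 0.25                        # membrane capacitance (nF)
gL, EL = 0.015, -70.0            # leak conductance (uS) and reversal (mV)
Ee, Ei = 0.0, -75.0              # synaptic reversal potentials (mV)
ge0, gi0 = 0.010, 0.030          # mean excitatory / inhibitory conductances (uS)
se, si = 0.003, 0.008            # conductance standard deviations (uS)
te, ti = 3.0, 10.0               # correlation times of the OU processes (ms)

dt, steps = 0.05, 100_000
V, ge, gi = EL, ge0, gi0
trace = np.empty(steps)
for k in range(steps):
    # Euler-Maruyama updates of the two Ornstein-Uhlenbeck conductances.
    ge += dt * (ge0 - ge) / te + se * np.sqrt(2.0 * dt / te) * rng.standard_normal()
    gi += dt * (gi0 - gi) / ti + si * np.sqrt(2.0 * dt / ti) * rng.standard_normal()
    # Passive membrane equation integrated by forward Euler.
    I_syn = ge * (V - Ee) + gi * (V - Ei)
    V += dt * (-gL * (V - EL) - I_syn) / Cm
    trace[k] = V

print(f"Vm mean {trace.mean():.1f} mV, Vm std {trace.std():.2f} mV")
```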